Expectation Maximization: a Gentle Introduction
Author
Abstract
This tutorial is written for students and researchers who want to make their first contact with the Expectation Maximization (EM) algorithm. The main motivation for writing it was that I did not find any text that fitted my needs. I started with the great book "Artificial Intelligence: A Modern Approach" by Russell and Norvig [6], which provides lots of intuition, but I was disappointed that the general explanation of the EM algorithm did not directly fit the examples shown. There is a very good tutorial written by Jeff Bilmes [1]. However, I felt that some parts of the mathematical derivation could be shortened by another choice of the hidden variable (explained later on). Finally, I had a look at the book "The EM Algorithm and Extensions" by McLachlan and Krishnan [4]. I can definitely recommend this book to anybody involved with the EM algorithm, though, for a beginner, some steps could be more elaborate. So, I ended up writing my own tutorial. Any feedback is greatly appreciated (error corrections, spelling/grammar mistakes, etc.).
Similar resources
Mixture Models and Expectation-Maximization
This tutorial attempts to provide a gentle introduction to EM by way of simple examples involving maximum-likelihood estimation of mixture-model parameters. Readers familiar with ML parameter estimation and clustering may want to skip directly to Sections 5.2 and 5.3.
A Gentle Introduction to the EM Algorithm, Part I: Theory
Introduction: My aim is to introduce the Expectation Maximization (EM) algorithm to you, especially some of its theory. I will skip proofs, but I will derive many formulae that have practical use. The EM algorithm is iterative, and you should be familiar with its convergence properties; I will discuss them in detail. I will present applications of the EM algorithm to signal and image processing in a comp...
Convexity, Maximum Likelihood and All That
This note is meant as a gentle but comprehensive introduction to the expectation-maximization (EM) and improved iterative scaling (IIS) algorithms, two popular techniques in maximum likelihood estimation. The focus in this tutorial is on the foundation common to the two algorithms: convex functions and their convenient properties. Where examples are called for, we draw from applications in huma...
An Improved EM Algorithm
In this paper, we first give a brief introduction to the expectation maximization (EM) algorithm, and then discuss its sensitivity to initial values. Subsequently, we give a short proof of EM's convergence. Then, we implement experiments with the expectation maximization algorithm (all experiments are carried out on Gaussian mixture models (GMMs)). Our experiment...
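To make the EM-on-GMM experiments described above concrete, here is a minimal sketch of EM for a one-dimensional Gaussian mixture. This is an illustration only, not the paper's implementation; the function name and the quantile-based initialization scheme are my own choices. As the abstract notes, EM is sensitive to its starting point, which is why the initialization step matters.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50):
    """Illustrative EM for a 1-D Gaussian mixture with k components."""
    n = len(x)
    # EM is sensitive to initialization; spreading the initial means
    # over the data quantiles gives a reasonable starting point.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i).
        dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
pi, mu, var = em_gmm_1d(x)
```

With a poor initialization (e.g. both means started inside the same cluster), the same loop can converge to a much worse local optimum, which is the sensitivity the paper investigates.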
A General Framework For Task-Oriented Network Inference
We present a brief introduction to a flexible, general network inference framework which models data as a network space, sampled to optimize network structure to a particular task. We introduce a formal problem statement related to influence maximization in networks, where the network structure is not given as input, but learned jointly with an influence maximization solution.
Publication date: 2008